# 4-bit Quantization for Efficient Inference

Qwen2 VL 7B Latex OCR
License: Apache-2.0
A fine-tuned version of the Qwen2-VL-7B model, trained with Unsloth and the Hugging Face TRL library, achieving a 2x improvement in inference speed. A minimal 4-bit loading sketch follows this entry.
Tags: Image-to-Text, Transformers, English
Author: erickrus
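
The entry above mentions 4-bit quantization together with the Unsloth/TRL training stack. As a rough illustration of the inference side only, the sketch below loads a Qwen2-VL checkpoint in 4-bit NF4 via Hugging Face transformers and bitsandbytes and asks it to transcribe an equation image into LaTeX. The repo id `Qwen/Qwen2-VL-7B-Instruct` is the public base model, used here as a stand-in because the fine-tuned checkpoint's repo id is not listed on this page; `equation.png` is a hypothetical local image.

```python
# Minimal sketch: 4-bit (NF4) loading of a Qwen2-VL checkpoint for LaTeX OCR-style prompting.
# Assumptions: base repo id stands in for the fine-tuned model; "equation.png" is a local image.
import torch
from PIL import Image
from transformers import AutoProcessor, BitsAndBytesConfig, Qwen2VLForConditionalGeneration

MODEL_ID = "Qwen/Qwen2-VL-7B-Instruct"  # swap in the fine-tuned LaTeX OCR repo id if available

# 4-bit NF4 quantization: weights stored in 4 bits, computation in bfloat16,
# cutting weight memory roughly 4x versus fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

# Build a chat-style prompt with one image and one instruction.
image = Image.open("equation.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Transcribe this equation into LaTeX."},
        ],
    }
]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
out = model.generate(**inputs, max_new_tokens=256)
latex = processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(latex)
```

The same 4-bit configuration can be reused for other vision-language checkpoints in the transformers ecosystem; only the repo id and the prompt need to change.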